I have a FAS8300 (no disks), a DS4246 shelf (24 x 200 GB SSD) and a DS460 shelf (60 x 4 TB SAS). These were all previously in use in other clusters and I'm trying to set them up as a new switchless cluster. At the boot menu, I chose option 4 ("Clean configuration and initialize all disks"). However, ONTAP failed to boot, saying the following:

BOOTMGR: The system has 2 disks assigned whereas it needs 3 to boot, will try to assign the required number.
May 30 19:20:55 [localhost:raid.autoPart.disabled:ALERT]: Disk auto-partitioning is disabled on this system: the system needs a minimum of 8 usable hard disks.
BOOTMGR: already_assigned=2, min_to_boot=3, num_assigned=0
May 30 19:20:55 [localhost:callhome.raid.adp.disabled:notice]: Disk auto-partitioning is disabled on this system: ADP DISABLED.
May 30 19:20:55 [localhost:diskown.split.shelf.assignStatus:notice]: Split-shelf based automatic drive assignment is "disabled".
May 30 19:20:55 [localhost:cf.fm.noMBDisksOrIc:error]: Could not find the local mailbox disks. Could not determine the firmware state of the partner through the HA interconnect.
Terminated

Looking at the connections, I see link lights between the two shelves, but I don't see link lights on either shelf on the ports connected to the FAS8300. I did the cabling per this setup guide: https://docs.netapp.com/us-en/ontap-systems/media/PDF/215-14512_2021-02_en-us_FAS8300orFAS8700_ISI.pdf

Controller to shelf:
Node 1, port 0a --> DS4246, IOM A, square port
Node 2, port 0a --> DS4246, IOM B, square port
Node 1, port 0d --> DS460, IOM B, port 3
Node 2, port 0d --> DS460, IOM A, port 3

Shelf to shelf:
DS4246, IOM A, circle port --> DS460, IOM A, port 1
DS4246, IOM B, circle port --> DS460, IOM B, port 1

Any ideas what I'm doing wrong here?
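In case it's useful, here's what I was planning to check next from maintenance mode (boot menu option 5). I'm not certain these are the right commands for chasing down shelf connectivity, so treat them as guesses:

*> storage show shelf      (which shelves this controller can actually see)
*> disk show -v            (all disks and their current ownership)
*> disk show -n            (unassigned disks only)
*> sasadmin expander_map   (SAS expander/cabling view per adapter)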
I have an old NetApp and a new one. Both are members of the same cluster and connected to a pair of cluster switches. I'm in the process of migrating data from one system to the other. I want to be sure that migrated volumes are being accessed directly via the LIFs on the new system, rather than via the old LIFs with requests traveling over the cluster switch network. However, I haven't found a way to do this. Is there an ONTAP command that will take a volume as input and show what's connecting to it, and which node and LIF each connection is using?

NOTE: My question applies to NFS/CIFS/SMB volumes specifically.
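The closest I've come so far is the commands below, but I'm not sure they're the right tool for this, and the vserver/volume/node names here are just placeholders:

::> vserver nfs connected-clients show -vserver svm1 -volume migrated_vol
::> vserver cifs session show -vserver svm1 -instance
::> network connections active show -node newnode-01 -service nfs*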
I need to create interface groups on ports e1a & e1b on my new NetApp nodes. However, I currently only have one connection per node available until I decom the old NetApp, after which I will be able to use its connections. Is it possible to create a multimode LACP interface group, temporarily add only port e1a to it, and then add port e1b later on, after I decom the old system? (Roughly what I have in mind is sketched below.) More importantly, will that single-port connection be safe to use? Or will data transport be impacted as the interface group attempts and fails to route connections over the second port, which won't be connected yet?
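Here's the sequence I'm picturing (node and ifgrp names are placeholders):

::> network port ifgrp create -node node3 -ifgrp a0a -distr-func ip -mode multimode_lacp
::> network port ifgrp add-port -node node3 -ifgrp a0a -port e1a
(then, after the old system is decommissioned)
::> network port ifgrp add-port -node node3 -ifgrp a0a -port e1b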
I have an older NetApp and I recently added a new one to the cluster. Both are connected to a pair of Fibre Channel (FC) switches. When connecting the new system, I added its WWPNs to all the zones on the FC switches. (Both NetApps are also connected to a pair of cluster switches.) Meanwhile, I have a bunch of VMware ESXi hosts connected to the same pair of FC switches.

With this all set up and ready to go, I migrated the LUNs from the older NetApp to the new one (using "volume move start ..."). All good so far. Then I noticed something strange in vCenter: when looking at the paths for the LUNs, it only shows the WWPNs for the old NetApp. My assumption, therefore, is that the hosts are accessing the LUNs indirectly, via the cluster switch network. That's not good.

As a test, I created a new LUN on the new NetApp and connected it to the hosts. This time, vCenter shows the WWPNs for the new nodes, as expected.

What's going on here? Is there something I need to do to give the migrated LUNs a kick so they start using the new paths? (Maybe I need to unmount them from the hosts and remount them?) Also, if I do nothing, will they automatically start using the new paths when I eventually shut down the old NetApp?
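While digging around, I ran across "Selective LUN Map" in the docs, which sounds related: apparently only a LUN's reporting nodes advertise paths to it. I'm guessing the fix is something like the commands below, but I haven't tried them yet, and the SVM/path/igroup names are made up:

::> lun mapping show -vserver svm1 -path /vol/migrated_vol/lun1 -fields reporting-nodes
::> lun mapping add-reporting-nodes -vserver svm1 -path /vol/migrated_vol/lun1 -igroup esx_hosts -destination-volume migrated_vol

From what I've read, you'd then rescan storage on the ESXi hosts to pick up the new paths.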
I recently upgraded to 9.12 after we replaced an old spinning-disk 2552 with a C250. That came with a bunch of GUI changes I'm still trying to get used to. One thing I'm struggling with is creating new LUNs or volumes. In the previous version I could select the tier (what used to be called an aggregate) that a volume gets created on, but in 9.12 there is no such option; it only seems to want to put new volumes on the older tiers (meaning the ones I had before). Also, previously I could create a new LUN and specify what volume I wanted it on. Not in the new GUI: if I create a new LUN, it creates a new volume and gives it the same name as the LUN. I was able to fumble my way through and accomplish what I wanted by going through the CLI (see below), but that is a bit onerous, as I'm not in there enough to have a great grasp of the commands. Did NetApp hire a bunch of Unix guys who think "CLI OR DIE!"? (I say that half-joking, as I used to work with some Unix guys who felt that if you didn't use the CLI you weren't a computer person.)
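For the record, this is roughly what I ended up running in the CLI (the SVM, volume, tier, and size values are just examples):

::> volume create -vserver svm1 -volume vm_vol01 -aggregate C250_tier01 -size 2TB
::> lun create -vserver svm1 -path /vol/vm_vol01/lun01 -size 1TB -ostype vmware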